Current Issue: July-September 2022, Issue 3 (5 Articles)
The proposed assistive hybrid brain-computer interface (BCI) semiautonomous mobile robotic arm demonstrates a design that is (1) adaptable, observing environmental changes with sensors and deploying alternate solutions, and (2) versatile, receiving commands from the user's brainwave signals through a noninvasive electroencephalogram (EEG) cap. Composed of three integrated subsystems (a hybrid BCI controller, an omnidirectional mobile base, and a robotic arm), the proposed robot maps its commands to the user's brainwaves elicited by a set of specific physical or mental tasks. The implementation of sensors and camera systems enables both the mobile base and the arm to be semiautonomous. The mobile base's SLAM algorithm provides obstacle avoidance and path planning to help the robot maneuver safely. The robot arm calculates and executes the joint movements needed to pick up or drop off an object that the user selects via a brainwave-controlled cursor on a camera feed. Validation, testing, and implementation of the subsystems were conducted in Gazebo. Communication between the BCI controller and each subsystem was tested independently: a loop of prerecorded brainwave data related to each specific task was used to verify that the mobile-base commands were executed, and the same prerecorded file was used to move the robot-arm cursor and initiate a pick-up or drop-off action. A final system test was conducted in which the BCI controller input moved the cursor and selected a goal point. Successful virtual demonstrations of the assistive robotic arm show the feasibility of restoring movement capability and autonomy to a disabled user.
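As a rough illustration of the idea of mapping classified mental-task labels to motions of an omnidirectional base, the following minimal Python sketch may help; the task names, velocity values, and mapping below are hypothetical assumptions, not taken from the paper, which defines its own command set and calibration.

```python
# Hypothetical sketch (not from the paper): mapping classified mental-task
# labels from a hybrid BCI to velocity commands for an omnidirectional base.
from dataclasses import dataclass

@dataclass
class Twist:
    vx: float   # forward velocity (m/s)
    vy: float   # lateral velocity (m/s), possible on an omnidirectional base
    wz: float   # yaw rate (rad/s)

# Assumed command set; the actual tasks and mappings come from the paper's
# BCI controller and its per-user calibration procedure.
COMMAND_MAP = {
    "imagine_left_hand":  Twist(0.0, 0.2, 0.0),   # strafe left
    "imagine_right_hand": Twist(0.0, -0.2, 0.0),  # strafe right
    "imagine_feet":       Twist(0.2, 0.0, 0.0),   # move forward
    "mental_arithmetic":  Twist(0.0, 0.0, 0.5),   # rotate in place
    "rest":               Twist(0.0, 0.0, 0.0),   # stop
}

def decode_to_twist(label: str) -> Twist:
    """Translate a classifier output into a base velocity command."""
    return COMMAND_MAP.get(label, COMMAND_MAP["rest"])  # fail safe: stop

if __name__ == "__main__":
    for label in ["imagine_feet", "mental_arithmetic", "unknown"]:
        print(label, "->", decode_to_twist(label))
```

Note the fail-safe default: an unrecognized or low-confidence classification stops the base rather than moving it, which matters for a safety-critical assistive device.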
Emotion recognition is a trending research field with several applications, the most interesting of which include robotic vision and interactive robotic communication. Human emotions can be detected through both speech and visual modalities, and facial expressions are an ideal means of detecting a person's emotions. This paper presents a real-time approach for implementing emotion detection and deploying it in robotic vision applications. The proposed approach consists of four phases: preprocessing, key point generation, key point selection and angular encoding, and classification. The main idea is to generate key points using the MediaPipe face mesh algorithm, which is based on real-time deep learning. The generated key points are then encoded using a sequence of carefully designed mesh generator and angular encoding modules. Furthermore, feature decomposition is performed using Principal Component Analysis (PCA) to enhance the accuracy of emotion detection. Finally, the decomposed features are fed into a machine learning (ML) technique based on a Support Vector Machine (SVM), k-Nearest Neighbor (KNN), Naïve Bayes (NB), Logistic Regression (LR), or Random Forest (RF) classifier. We also deploy a Multilayer Perceptron (MLP) as an efficient deep neural network technique. The presented techniques are evaluated on different datasets with different evaluation metrics. The simulation results reveal that they achieve superior performance, with a human emotion detection accuracy of 97%, surpassing previous efforts in this field.
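The pipeline described (face-mesh key points, angular encoding, PCA, classifier) can be sketched end to end in a few lines of Python. The landmark triplets, PCA dimensionality, and SVM hyperparameters below are illustrative assumptions; the paper's mesh generator selects its own key points, and the placeholder training data stands in for real extracted features.

```python
# Hypothetical sketch of the described pipeline: MediaPipe face-mesh key
# points -> angular encoding of landmark triplets -> PCA -> SVM classifier.
import numpy as np
import mediapipe as mp
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

_face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True)

def extract_keypoints(rgb_image: np.ndarray):
    """Return the MediaPipe face-mesh landmarks as an (N, 3) array, or None."""
    result = _face_mesh.process(rgb_image)
    if not result.multi_face_landmarks:
        return None
    lm = result.multi_face_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm])

# Illustrative landmark triplets (mouth and eye regions); the paper's key
# point selection module defines the actual set.
TRIPLETS = [(61, 13, 291), (33, 159, 133), (362, 386, 263)]

def angular_encode(points: np.ndarray) -> np.ndarray:
    """Encode each (a, b, c) triplet as the angle at vertex b."""
    feats = []
    for a, b, c in TRIPLETS:
        u, v = points[a] - points[b], points[c] - points[b]
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
        feats.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    return np.array(feats)

# PCA + SVM trained on pre-extracted angular features X with emotion labels y.
X = np.random.rand(100, len(TRIPLETS))   # placeholder features, not real data
y = np.random.randint(0, 7, size=100)    # placeholder labels (7 emotions)
model = make_pipeline(PCA(n_components=2), SVC(kernel="rbf")).fit(X, y)
```

Angles between landmark triplets are invariant to translation and scale of the face in the frame, which is one plausible motivation for an angular encoding over raw coordinates.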
Automatic speech recognition is essential for establishing natural communication with a human-computer interface. Speech recognition accuracy depends strongly on the complexity of the language. Highly inflected word forms are one type of unit present in some languages. The acoustic background is an additional important degradation factor influencing speech recognition accuracy. While the acoustic background has been studied extensively, highly inflected word forms and their combined influence still present a major research challenge. Thus, a novel type of analysis is proposed in which a dedicated speech database composed solely of highly inflected word forms is constructed and used for testing. Dedicated test sets with various acoustic backgrounds were generated and evaluated with the Slovenian UMB BN speech recognition system. The baseline word accuracies of 93.88% and 98.53% were reduced to as low as 23.58% and 15.14%, respectively, for the various acoustic backgrounds. The analysis shows that word accuracy degradation depends on and changes with the acoustic background type and level. The highly inflected word forms' test sets without background decreased word accuracy from 93.3% to as low as 63.3% in the worst case. The impact of highly inflected word forms on speech recognition accuracy diminished at higher acoustic background levels and was, in those cases, similar to that of the non-highly-inflected test sets. The results indicate that alternative methods of constructing speech databases, particularly for the low-resourced Slovenian language, could be beneficial.
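The word-accuracy figures quoted follow the standard definition WAcc = (N - S - D - I) / N, where N is the number of reference words and S, D, I are the substitutions, deletions, and insertions from the minimal word-level alignment. The abstract does not give the UMB BN system's scoring details, so the Python sketch below shows only this standard computation via word-level Levenshtein distance.

```python
# Standard word-accuracy metric: WAcc = (N - S - D - I) / N, with the error
# counts taken from the minimal word-level edit alignment.
def word_accuracy(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Word-level Levenshtein distance via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    errors = d[len(ref)][len(hyp)]  # S + D + I for the minimal alignment
    return 100.0 * (len(ref) - errors) / len(ref)

print(word_accuracy("the cat sat on the mat", "the cat sat on mat"))  # ~83.3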
Brain-computer interfaces (BCIs), a new type of rehabilitation technology, pick up nerve cell signals, identify and classify their activities, and convert them into computer-recognized instructions. This technique has been widely used in the rehabilitation of stroke patients in recent years and appears to promote motor function recovery after stroke. At present, BCI is increasingly being applied to poststroke cognitive impairment, a common complication that also affects the rehabilitation process. This paper reviews the promise and potential drawbacks of using BCI to treat poststroke cognitive impairment, providing a solid theoretical basis for the application of BCI in this area.
A brain-computer interface (BCI) is a promising technology that can analyze brain signals and control a robot or computer according to a user's intention. This paper introduces our studies aimed at overcoming the challenges of using BCIs in daily life. There are several methods for implementing BCIs, such as sensorimotor rhythms (SMR), P300, and steady-state visually evoked potential (SSVEP). These methods have different pros and cons depending on the BCI type, but all of them offer only a limited set of choices. Controlling a robot arm according to the user's intention enables BCI users to perform a wide range of tasks. We introduce our study on predicting three-dimensional arm movement with a non-invasive method, and describe how the prediction is compensated using an external camera to achieve high accuracy. For daily use, BCI users should be able to turn the BCI system on and off to cope with prediction errors, and should also be able to switch to the most efficient BCI type based on the user's state. We explain our study on estimating the user's state from the brain's functional connectivity using a convolutional neural network (CNN). Additionally, BCI users should be able to perform multiple tasks simultaneously, such as carrying an object, walking, or talking. We describe a multi-function BCI study that predicts multiple intentions simultaneously through a single classification model. Finally, we present our view on future directions for BCI research. Although there are still many limitations to using BCIs in daily life, we hope that our studies will serve as a foundation for developing a practical BCI system.
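To make the user-state estimation step concrete, the sketch below shows one plausible way a CNN could classify a functional-connectivity matrix (channel-by-channel) treated as a single-channel image. The architecture, channel count, and number of states are assumptions for illustration, not the paper's actual model.

```python
# Illustrative sketch (architecture and sizes are assumptions): a small CNN
# classifying the user's state from an EEG functional-connectivity matrix.
import torch
import torch.nn as nn

class StateCNN(nn.Module):
    def __init__(self, n_channels: int = 64, n_states: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * (n_channels // 4) ** 2, n_states)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_channels, n_channels) connectivity matrices
        return self.classifier(self.features(x).flatten(1))

# Example: one simulated 64x64 connectivity matrix -> state logits.
model = StateCNN()
logits = model(torch.rand(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 3])
```

In such a design, the predicted state could drive the mode-switching logic described above, selecting the BCI type (SMR, P300, or SSVEP) best suited to the user's current condition.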